Data Privacy Law


Italy will lift ChatGPT ban if OpenAI fixes privacy issues

#artificialintelligence

Italy's data protection authority has said that it's willing to lift its ChatGPT ban if OpenAI meets specific conditions. The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI's ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy's data privacy laws and the EU's infamous General Data Protection Regulation (GDPR). The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.


Regulating AI Through Data Privacy

Stanford HAI

In the absence of a national data privacy law in the U.S., California has been more active than any other state in efforts to fill the gap on a state level. The state enacted one of the nation's first data privacy laws, the California Privacy Rights Act (Proposition 24) in 2020, and an additional law will take effect in 2023. A new state agency created by the law, the California Privacy Protection Agency, recently issued an invitation for public comment on the many open questions surrounding the law's implementation. Our team of Stanford researchers, graduate students, and undergraduates examined the proposed law and have concluded that data privacy can be a useful tool in regulating AI, but California's new law must be more narrowly tailored to prevent overreach, focus more on AI model transparency, and ensure people's rights to delete their personal information are not usurped by the use of AI. Additionally, we suggest that the regulation's proposed transparency provision requiring companies to explain to consumers the logic underlying their "automated decision making" processes could be more powerful if it instead focused on providing greater transparency about the data used to enable such processes. Finally, we argue that the data embedded in machine-learning models must be explicitly included when considering consumers' rights to delete, know, and correct their data.


How to Stay Compliant with Data Privacy Laws While Using AI

#artificialintelligence

The best organizations have integrated all their data onto a single platform. This allows them to unify the data and understand their customers across every touchpoint. Though chatbots, customer support lines, and online chats are all good ways to help customers find the information and answers they need, they can still cause frustration if they are not integrated. With data integration, employees can check a customer's past conversations right away, without asking redundant questions, and automated messaging systems can be enhanced as well.
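The data-integration idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the channel names, field names, and `unify` helper are assumptions, not part of any product described in the article): records from several touchpoints are merged into one per-customer history, so an agent can see all prior interactions at once.

```python
# Hypothetical sketch: unify customer conversations from several
# touchpoints (chatbot, support line, web chat) into one view keyed
# by customer ID. Field names and data are illustrative only.
from collections import defaultdict

chatbot_logs = [
    {"customer_id": "c1", "channel": "chatbot", "message": "How do I reset my password?"},
]
support_calls = [
    {"customer_id": "c1", "channel": "phone", "message": "Still locked out of my account."},
]
web_chats = [
    {"customer_id": "c2", "channel": "web", "message": "Where is my order?"},
]

def unify(*sources):
    """Merge records from all touchpoints into a per-customer history."""
    history = defaultdict(list)
    for source in sources:
        for record in source:
            history[record["customer_id"]].append(record)
    return dict(history)

unified = unify(chatbot_logs, support_calls, web_chats)

# An agent handling customer c1 now sees both prior interactions,
# regardless of which channel they came through:
for record in unified["c1"]:
    print(record["channel"], "-", record["message"])
```

In a real deployment the merge would run against databases or a CDP rather than in-memory lists, but the principle is the same: one key (the customer) joins every channel's records.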


The EU's Artificial Intelligence Act: A Pragmatic Approach - Techonomy

#artificialintelligence

The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that will foster increased trust in its adoption. Similar to the EU's General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR spearheaded data privacy regulations across the United States and around the world.


Interesting AI/ML Articles You Should Read This Week (Aug 15)

#artificialintelligence

Get an overview of the ever-changing face of data privacy laws and the loopholes that persist within them. Aren Carpenter shines a light on the importance of adaptability in data privacy regulation, showcasing notable gaps in the wide net governing bodies cast over the handling of personal and private data. Aren's article traces the evolution of data privacy laws from HIPAA (1996) and its Omnibus Final Rule (2013) to the more recent GDPR. He argues that blurred lines and vague privacy laws create ambiguity that makes it difficult for an organisation to navigate patient privacy boundaries. Nonetheless, the first section of the article illustrates an effort by governments and regulating bodies to update data privacy laws in step with advances in technology and information gathering.


Judge: Facebook's $550 Million Settlement In Facial Recognition Case Is Not Enough

NPR Technology

Facebook in January agreed to a historic $550 million settlement over its face-identifying technology. But now, the federal judge overseeing the case is refusing to accept the deal. Next week, lawyers for Facebook will be back in court, trying to convince a judge they should be allowed to settle a class action suit that accuses the company of violating users' privacy.


How Is The Banking Industry In Malaysia Adopting Data Science?

#artificialintelligence

Ratul has more than 11 years of experience in technology and advanced analytics, machine learning, and AI systems, primarily focused on consumer and SME lending. Ratul worked at the leading credit bureau TransUnion CIBIL in India, where he focused on automation and machine learning using the power of Big Data. Earlier, Ratul worked at SAS, a global leader in data science, in its Research and Development office in Pune, India. Ratul holds a B.Tech from the National Institute of Technology, Calicut (India). Analytics India Magazine: How important are data science and AI within banking systems in Malaysia?


2017 Cybersecurity Predictions: Machine Learning and AI-Driven Frameworks Shape Cloud Security - Palo Alto Networks Blog

#artificialintelligence

This post is part of an ongoing blog series examining "Sure Things" (predictions that are almost guaranteed to happen) and "Long Shots" (predictions that are less likely to happen) in cybersecurity in 2017. In the last few years, the digital footprint of organizations has expanded beyond the confines of the on-premise data center and private cloud to a model that now incorporates SaaS and public clouds. To date, InfoSec teams have been in a reactive mode while trying to implement a comprehensive security strategy across their hybrid architecture. In 2017, we will see a concerted effort from InfoSec teams to build and roll out a multi-cloud security strategy geared toward addressing the emerging digital needs of their organizations. Maintaining a consistent security posture, pervasive visibility, and ease of security management across all clouds will drive security teams to extend their strategy beyond security considerations for public and private clouds and also focus on securely enabling SaaS applications.